Integrating audio and visual cues for speaker friendliness in multimodal speech synthesis
Abstract
This paper investigates interactions between audio and visual cues to friendliness in questions in two perception experiments. In the first experiment, manually edited parametric audio-visual synthesis was used to create the stimuli. Results were consistent with earlier findings in that a late, high final focal accent peak was perceived as friendlier than an earlier, lower focal accent peak. Friendliness was also effectively signaled by visual facial parameters such as a smile, a head nod, and eyebrow raising synchronized with the final accent. Consistent additive effects were found between the audio and visual cues, both for the subjects as a group and individually, showing that subjects integrate the two modalities. The second experiment used data-driven visual synthesis, with a database recorded by an actor instructed to portray anger and happiness. Friendliness was correlated with the happy database, but the effect was not as strong as for the parametric synthesis.
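A minimal sketch of what the reported "additive effects" between cues would look like in analysis terms: friendliness ratings modeled as a weighted sum of an audio cue (late/high focal accent peak) and visual cues (smile, head nod, eyebrow raise). All cue names, weights, and rating values below are illustrative assumptions, not the authors' actual stimuli or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary cue levels for 16 hypothetical stimuli: 1 = cue present.
# Columns: late/high accent peak, smile, head nod, eyebrow raise.
cues = np.array([[a, s, n, e]
                 for a in (0, 1) for s in (0, 1)
                 for n in (0, 1) for e in (0, 1)], dtype=float)

# Assumed additive contribution of each cue to a friendliness rating
# on an arbitrary scale, plus listener noise (all values made up).
true_weights = np.array([1.0, 0.8, 0.5, 0.4])
ratings = 3.0 + cues @ true_weights + rng.normal(0.0, 0.2, len(cues))

# Recover the cue weights with ordinary least squares; a purely
# additive model (no interaction terms) fitting well is what
# consistent additive integration would look like in this toy setup.
X = np.column_stack([np.ones(len(cues)), cues])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print("estimated intercept and cue weights:", np.round(coef, 2))
```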
Related articles
Audio-Visual Correlation Modeling for Speaker Identification and Synthesis
This thesis addresses two major problems of multimodal signal processing using audiovisual correlation modeling: speaker recognition and speaker synthesis. We address the first problem, i.e., the audiovisual speaker recognition problem within an open-set identification framework, where audio (speech) and lip texture (intensity) modalities are fused employing a combination of early and late inte...
A new posterior based audio-visual integration method for robust speech recognition
We describe the development of a multistream HMM based audio-visual speech recognition (AVSR) system and a new method for integrating the audio and visual streams using frame level posterior probabilities. This is compared to the standard feature concatenation and weighted product methods in speaker-dependent tests using our own multimodal database, by examining speech recognition robustness to...
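A rough sketch of frame-level posterior fusion in the spirit of the approach summarized above: per-frame class posteriors from an audio stream and a visual stream are combined with a stream weight before decoding. The arrays, the weighted-product rule, and the weight value are illustrative assumptions; the cited system's actual models and weighting scheme may differ.

```python
import numpy as np

def fuse_posteriors(p_audio, p_video, audio_weight=0.7):
    """Combine per-frame posteriors (frames x classes) from two streams.

    Uses a weighted product in the log domain (a common multistream
    combination rule), then renormalizes each frame to sum to one.
    """
    log_p = audio_weight * np.log(p_audio + 1e-12) \
        + (1.0 - audio_weight) * np.log(p_video + 1e-12)
    p = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

# Toy example: 3 frames, 4 classes per stream.
rng = np.random.default_rng(1)
p_a = rng.dirichlet(np.ones(4), size=3)
p_v = rng.dirichlet(np.ones(4), size=3)
fused = fuse_posteriors(p_a, p_v)
print("fused per-frame class choices:", fused.argmax(axis=1))
```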
Acoustic and Visual Cues of Turn-Taking Dynamics in Dyadic Interactions
In this paper we introduce an empirical study of multimodal cues of turn-taking dynamics in a social interaction context. We first identify pauses, gaps and overlapped speech segments in the dyadic conversation dataset. Second, we define two types of measurements, Mean Equalized Energy (MEE) and Animation Level (AL) on the audio and video channels, respectively. Then, we verify the hypothesis t...
An audio-visual approach to simultaneous-speaker speech recognition
Audio-visual speech recognition is an area with great potential to help solve challenging problems in speech processing. Difficulties due to background noises are significantly reduced by the additional information provided by extra visual features. The presence of additional speech from other talkers during recording may be viewed as one of the most difficult sources of noise. This paper prese...
Prosodic Cues in Multimodal Speech Perception
Potential visual prosodic cues for prominence and phrasing comprising eyebrow movements were manipulated using a system for audio-visual text-to-speech synthesis which has been implemented based on the KTH rule-based synthesis. Two functions of prosody (prominence and phrasing) were tested in two separate experiments. A test sentence, ambiguous in terms of an internal phrase boundary, was used ...
Publication year: 2007